fix initial_epoch in fine tuning step #2292
Conversation
initial_epoch for the fine-tuning phase should be 1 more than history.epoch[-1], so that history_fine.epoch is [10, 11, ..., 19], a total of 10 fine-tune epochs. Without the '+1', history.epoch is [0, 1, ..., 9] and history_fine.epoch is [9, 10, ..., 19]: the epoch index overlaps at 9, fine tuning actually runs for 11 epochs rather than 10, and the model is trained for 21 epochs in total (confirmed by the x axis of the combined training history curve, which has 21 data points).
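For reference, a minimal sketch of the behavior described above, assuming the variable names used in the transfer_learning tutorial (model, train_dataset, validation_dataset); this is an illustration of the off-by-one, not the literal diff:

```python
initial_epochs = 10
fine_tune_epochs = 10
total_epochs = initial_epochs + fine_tune_epochs  # 20

history = model.fit(train_dataset,
                    epochs=initial_epochs,
                    validation_data=validation_dataset)
# history.epoch == [0, 1, ..., 9]

# Buggy: initial_epoch=history.epoch[-1] is 9, so fine tuning reruns
# epoch index 9 and trains epochs 9..19, i.e. 11 epochs instead of 10.
# Fixed: start one past the last completed epoch index.
history_fine = model.fit(train_dataset,
                         epochs=total_epochs,
                         initial_epoch=history.epoch[-1] + 1,
                         validation_data=validation_dataset)
# history_fine.epoch == [10, 11, ..., 19]
```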
…nitialepoch Update transfer_learning.ipynb - fix initial_epoch for fine tuning
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Preview: preview and run these notebook edits with Google Colab. Rendered notebook diffs are available on ReviewNB.com.

Format and style: use the TensorFlow docs notebook tools to format for consistent source diffs and lint for style:

$ python3 -m pip install -U --user git+https://github.com/tensorflow/docs

If commits are added to the pull request, synchronize your local branch: git pull origin master
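For what it's worth, the installed tools are typically invoked like this (the nbfmt/nblint module paths reflect my understanding of the tensorflow/docs tooling, and the notebook path is assumed from the repo layout):

```
$ python3 -m tensorflow_docs.tools.nbfmt site/en/tutorials/images/transfer_learning.ipynb
$ python3 -m tensorflow_docs.tools.nblint site/en/tutorials/images/transfer_learning.ipynb
```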
Thanks!
I added the formatting check and also went with an alternative fix for the initial_epoch issue: if 'initial_epoch' really means how many epochs the model has previously been trained for, then the alternative fix is both conceptually and code-wise cleaner. I have both changes committed to my own repo, and I see they already show up in the history for this pull request above. Will this new version be reviewed as part of this pull request, or do I have to open a different pull request?
Using alternative fix for cleaner concept and code.
Added formatting check
@MarkDaoust @markmcd PTAL
initial_epoch for the fine-tuning phase should be 1 more than history.epoch[-1], so that history_fine.epoch is [10, 11, ..., 19], a total of 10 fine-tune epochs.

In the current version, without the '+1', history.epoch is [0, 1, ..., 9] and history_fine.epoch is [9, 10, ..., 19]: the epoch index overlaps at 9, fine tuning actually runs for 11 epochs rather than 10, and the model is trained for 21 epochs in total (confirmed by the x axis of the combined training history curve, which has 21 data points). Training for 21 epochs in total is not itself the problem; the bug is that the history and history_fine epoch indices overlap.

The 'initial_epoch' argument in model.fit() can be loosely interpreted as the number of epochs the model has previously been trained for. If a model has never been trained, initial_epoch=0 (the default). If it has already been trained for 10 epochs, initial_epoch should be 10, which in this tutorial is history.epoch[-1] + 1.
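To make that reading concrete, here is a short sketch of the equivalence; the len(history.epoch) spelling is my own illustration of the "epochs trained so far" interpretation, not necessarily the exact change committed in this PR:

```python
# After the first training phase of 10 epochs:
assert history.epoch == list(range(10))            # [0, 1, ..., 9]
assert history.epoch[-1] + 1 == len(history.epoch) == 10

# Resuming at "number of epochs trained so far" gives exactly
# 10 fine-tune epochs with indices 10..19, no overlap with history.
history_fine = model.fit(train_dataset,
                         epochs=total_epochs,
                         initial_epoch=len(history.epoch),
                         validation_data=validation_dataset)
```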